
    Incompressibility in finite nuclei and nuclear matter

    The incompressibility (compression modulus) $K_0$ of infinite symmetric nuclear matter at saturation density has become one of the major constraints on mean-field models of nuclear many-body systems as well as on models of high-density matter in astrophysical objects and heavy-ion collisions. We present a comprehensive re-analysis of recent data on GMR energies in even-even $^{112-124}$Sn and $^{106,100-116}$Cd and earlier data on $58 \le A \le 208$ nuclei. The incompressibility of finite nuclei $K_A$ is expressed as a leptodermous expansion with volume, surface, isospin and Coulomb coefficients $K_{\rm vol}$, $K_{\rm surf}$, $K_\tau$ and $K_{\rm coul}$. Assuming that the volume coefficient $K_{\rm vol}$ is identified with $K_0$, that $K_{\rm coul} = -(5.2 \pm 0.7)$ MeV, and that the contribution from the curvature term $K_{\rm curv}A^{-2/3}$ in the expansion is negligible, compelling evidence is found for $K_0$ to lie in the range $250 < K_0 < 315$ MeV, the ratio of the surface and volume coefficients $c = K_{\rm surf}/K_{\rm vol}$ to lie between -2.4 and -1.6, and $K_\tau$ between -840 and -350 MeV. We show that the generally accepted value of $K_0 = (240 \pm 20)$ MeV can be obtained from the fits provided $c \sim -1$, as predicted by the majority of mean-field models. However, the fits are significantly improved if $c$ is allowed to vary, leading to a range of $K_0$ extended to higher values. A self-consistent simple (toy) model has been developed, which shows that the density dependence of the surface diffuseness of a vibrating nucleus plays a major role in determining the ratio $K_{\rm surf}/K_{\rm vol}$ and yields predictions consistent with our findings. Comment: 26 pages, 13 figures; corrected minor typos in line with the proof in Phys. Rev.
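    For orientation, the leptodermous expansion referred to above is conventionally written as (a standard textbook form, not quoted verbatim from the paper)

    \[
      K_A \simeq K_{\rm vol} + K_{\rm surf}\,A^{-1/3} + K_\tau\left(\frac{N-Z}{A}\right)^{2} + K_{\rm coul}\,\frac{Z^{2}}{A^{4/3}},
    \]

    so the coefficients quoted above are obtained by fitting measured $K_A$ values (extracted from GMR energies) against $A^{-1/3}$, the asymmetry $(N-Z)/A$, and $Z^{2}A^{-4/3}$; the neglected curvature correction would enter as an additional $K_{\rm curv}A^{-2/3}$ term.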

    Computer systems: What the future holds

    Development of computer architecture is discussed in terms of the proliferation of the microprocessor, the utility of the medium-scale computer, and the sheer computational power of the large-scale machine. Changes in applications brought about by ever-lower costs, smaller sizes, and faster switching times are also included.

    Parallel tridiagonal equation solvers

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the lowest operation count for highly structured sets of equations, and recursive doubling has the lowest count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may depend more on overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the preferred algorithm in all cases.
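    To make the structure of cyclic odd-even reduction concrete, the following minimal serial sketch in Python (an illustration written for this summary, not code from the paper) solves a tridiagonal system of size n = 2**k - 1; the point to notice is that every elimination within one reduction level is independent of the others, which is the parallelism an array machine such as ILLIAC 4 can exploit.

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            """Solve a tridiagonal system by cyclic odd-even reduction.

            a, b, c are the sub-, main- and super-diagonals and d is the
            right-hand side; this sketch assumes the size is n = 2**k - 1.
            """
            a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
            n = len(b)
            k = int(np.log2(n + 1))
            assert n == 2**k - 1, "this sketch assumes n = 2**k - 1"
            a[0] = 0.0   # no coupling outside the system
            c[-1] = 0.0
            x = np.zeros(n)

            # Forward reduction: each level folds the odd equations of that
            # level into their neighbours; every elimination in a level is
            # independent, so a level can be done in one parallel step.
            for level in range(1, k):
                h = 2 ** (level - 1)
                for i in range(2 * h - 1, n, 2 * h):
                    im, ip = i - h, i + h
                    alpha = -a[i] / b[im]
                    beta = -c[i] / b[ip] if ip < n else 0.0
                    b[i] += alpha * c[im] + (beta * a[ip] if ip < n else 0.0)
                    d[i] += alpha * d[im] + (beta * d[ip] if ip < n else 0.0)
                    a[i] = alpha * a[im]
                    c[i] = beta * c[ip] if ip < n else 0.0

            # Back substitution: recover the unknowns level by level, again
            # with full independence inside each level.
            for level in range(k - 1, -1, -1):
                h = 2 ** level
                for i in range(h - 1, n, 2 * h):
                    left = x[i - h] if i - h >= 0 else 0.0
                    right = x[i + h] if i + h < n else 0.0
                    x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
            return x

    On a serial machine this performs more arithmetic than ordinary Gaussian elimination for tridiagonal systems; its appeal, as the comparison above indicates, is that each level's eliminations can be carried out simultaneously across an array of processors.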

    Fluid sample collector Patent

    Design and development of a fluid sample collector.

    Precision in the perception of direction of a moving pattern

    We examine the precision of the model of pattern motion analysis put forth by Adelson and Movshon (1982), who proposed that humans determine the direction of a moving plaid (the sum of two sinusoidal gratings of different orientations) in two steps: the velocities of the grating components are first estimated, then combined using the intersection of constraints to determine the velocity of the plaid as a whole. Under the additional assumption that the noise sources for the component velocities are independent, an approximate expression can be derived for the precision in plaid direction as a function of the precision in the speed and direction of the components. Monte Carlo simulations verify that the expression is valid to within 5 percent over the natural range of the parameters. The expression is then used to predict human performance based on available estimates of human precision in the judgment of single-component speed. Human performance is predicted to deteriorate by a factor of 3 as half the angle between the wavefronts (theta) decreases from 60 to 30 deg, but actual performance does not. The mean direction discrimination for three human observers was 4.3 plus or minus 0.9 deg (SD) for theta = 60 deg and 5.9 plus or minus 1.2 deg for theta = 30 deg. This discrepancy can be resolved in two ways: if the noises in the internal representations of the component speeds are smaller than the available estimates, or if these noises are not independent, then the psychophysical results are consistent with the Adelson-Movshon hypothesis.
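    To illustrate the intersection-of-constraints step and how independent component noise propagates to the perceived plaid direction, here is a small Python sketch (written for this summary; the noise levels are placeholder assumptions, not the estimates used in the study):

        import numpy as np

        def plaid_velocity(dir1, speed1, dir2, speed2):
            """Intersection of constraints: the plaid velocity v is the unique
            vector satisfying v . n_i = speed_i, where n_i is the unit normal
            (motion direction, in radians) of grating component i."""
            n = np.array([[np.cos(dir1), np.sin(dir1)],
                          [np.cos(dir2), np.sin(dir2)]])
            return np.linalg.solve(n, np.array([speed1, speed2]))

        # Monte Carlo propagation of independent component noise to the plaid
        # direction, for a plaid moving rightward at unit speed with component
        # normals at +/- theta from the plaid direction.
        rng = np.random.default_rng(0)
        theta = np.deg2rad(30.0)                   # half-angle between the wavefronts
        speed_sd, dir_sd = 0.05, np.deg2rad(1.0)   # assumed noise levels (placeholders)
        directions = []
        for _ in range(10000):
            s1 = np.cos(theta) * (1 + speed_sd * rng.standard_normal())
            s2 = np.cos(theta) * (1 + speed_sd * rng.standard_normal())
            d1 = +theta + dir_sd * rng.standard_normal()
            d2 = -theta + dir_sd * rng.standard_normal()
            v = plaid_velocity(d1, s1, d2, s2)
            directions.append(np.degrees(np.arctan2(v[1], v[0])))
        print("s.d. of recovered plaid direction (deg):", np.std(directions))

    Running the same simulation at theta = 60 deg and theta = 30 deg shows how identical component noise yields different plaid-direction precision, which is the comparison against human performance discussed above.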

    Modeling Hybrid Stars

    We study so-called hybrid stars, which are hadronic stars that contain a core of deconfined quarks. For this purpose, we make use of an extended version of the SU(3) chiral model. Within this approach, the degrees of freedom change naturally from hadrons (the baryon octet) to quarks (u, d, s) as the temperature and/or density increases. At zero temperature we are still able to reproduce massive stars, even with the inclusion of hyperons. Comment: To appear in the proceedings of Conference C12-08-0

    Nonlinear Evolution of the Magnetohydrodynamic Rayleigh-Taylor Instability

    We study the nonlinear evolution of the magnetic Rayleigh-Taylor instability using three-dimensional MHD simulations. We consider the idealized case of two inviscid, perfectly conducting fluids of constant density separated by a contact discontinuity perpendicular to the effective gravity g, with a uniform magnetic field B parallel to the interface. Modes parallel to the field with wavelengths smaller than l_c = B^2/[(d_h - d_l) g] are suppressed (where d_h and d_l are the densities of the heavy and light fluids respectively), whereas modes perpendicular to B are unaffected. We study strong fields with l_c varying between 0.01 and 0.36 of the horizontal extent of the computational domain. Even a weak field produces tension forces on small scales that are significant enough to reduce shear (as measured by the distribution of the amplitude of vorticity), which in turn reduces the mixing between the fluids and increases the rate at which bubbles and fingers are displaced from the interface compared to the purely hydrodynamic case. For strong fields, the highly anisotropic nature of the unstable modes produces ropes and filaments. However, at late times flow along field lines produces large-scale bubbles. The kinetic and magnetic energies transverse to gravity remain in rough equipartition and increase as t^4 at early times. The growth deviates from this form once the magnetic energy in the vertical field becomes larger than the energy in the initial field. We comment on the implications of our results for Z-pinch experiments and a variety of astrophysical systems. Comment: 25 pages, accepted by Physics of Fluids; the online version of the journal has high-resolution figures.
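    For reference, the cutoff quoted above follows from the classical linear result for incompressible, magnetized Rayleigh-Taylor instability with a uniform field parallel to the interface (Chandrasekhar's analysis, stated here in Gaussian units as background, not an expression taken from the paper). For a perturbation of wavenumber k at angle theta to B, the growth rate n obeys

    \[
      n^{2} \;=\; \frac{g\,k\,(\rho_h - \rho_l)}{\rho_h + \rho_l} \;-\; \frac{2\,(k B \cos\theta)^{2}}{4\pi\,(\rho_h + \rho_l)},
    \]

    so modes along the field (theta = 0) are stabilized ($n^2 \le 0$) for wavelengths $\lambda < \lambda_c = B^{2}/[(\rho_h - \rho_l)\,g]$, while modes perpendicular to the field feel no tension and grow at the hydrodynamic rate.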

    Measurement of sigma_Total in e+e- Annihilations Below 10.56 GeV

    Using the CLEO III detector, we measure absolute cross sections for e+e- -> hadrons at seven center-of-mass energies between 6.964 and 10.538 GeV. R, the ratio of the hadronic and muon-pair production cross sections, is measured at these energies with an r.m.s. error of <2%, allowing determinations of the strong coupling alpha_s. Using the expected evolution of alpha_s with energy, we find alpha_s(M_Z^2) = 0.126 +/- 0.005 ^{+0.015}_{-0.011} and Lambda = 0.31^{+0.09+0.29}_{-0.08-0.21}. Comment: Presented at "The 2007 Europhysics Conference on High Energy Physics," Manchester, England, 19-25 July 2007, to appear in the proceedings. Three pages, 1 figure.
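    For context, the connection between R and alpha_s exploited in such measurements is the standard perturbative-QCD prediction (the textbook leading-order form, not the paper's full analysis): below the open-bottom threshold, with quark flavours u, d, s, c active,

    \[
      R \;\equiv\; \frac{\sigma(e^{+}e^{-}\to \mathrm{hadrons})}{\sigma(e^{+}e^{-}\to \mu^{+}\mu^{-})}
        \;=\; N_c \sum_{q=u,d,s,c} e_q^{2}\,\Bigl(1 + \frac{\alpha_s}{\pi} + \mathcal{O}(\alpha_s^{2})\Bigr)
        \;=\; \frac{10}{3}\Bigl(1 + \frac{\alpha_s}{\pi} + \dots\Bigr),
    \]

    so a measurement of R at the per-cent level translates into a determination of alpha_s at the corresponding scale, which can then be evolved to the Z mass.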

    Effect of contrast on human speed perception

    This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of the log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appear lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.